
Ceph RBD: Restore the filesystems UUID on the volume #12745

Merged
merged 2 commits into from
Jan 19, 2024

Conversation

roosterfish
Contributor

Ceph RBD snapshots are a read-only logical copy of the parent volume, so rewriting the filesystem's UUID on the snapshot itself fails.
This was observed in #12743.

Fixes #12744
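The failure mode can be illustrated with a small sketch. This is a toy in-memory model only, not LXD's actual driver code: the class and method names (`RBDVolume`, `regenerate_fs_uuid`, etc.) are hypothetical, and regenerating a UUID stands in for running a tool such as `xfs_admin -U generate` against the device. The point is that the UUID rewrite must target the writable restored volume, never the read-only snapshot.

```python
# Illustrative sketch only: why regenerating a filesystem UUID on a
# Ceph RBD snapshot fails. All names here are hypothetical.
import uuid


class RBDVolume:
    """A writable volume carrying a filesystem UUID."""

    def __init__(self, fs_uuid):
        self.fs_uuid = fs_uuid
        self.read_only = False

    def regenerate_fs_uuid(self):
        # Stands in for e.g. `xfs_admin -U generate` on the block device.
        if self.read_only:
            raise PermissionError("cannot rewrite UUID on a read-only device")
        self.fs_uuid = str(uuid.uuid4())


class RBDSnapshot(RBDVolume):
    """Snapshots are read-only logical copies of the parent volume."""

    def __init__(self, parent):
        super().__init__(parent.fs_uuid)
        self.read_only = True


def restore(parent, snapshot):
    # Roll the parent back to the snapshot's contents, then regenerate
    # the UUID on the (writable) parent, never on the snapshot itself.
    parent.fs_uuid = snapshot.fs_uuid
    parent.regenerate_fs_uuid()


vol = RBDVolume(fs_uuid=str(uuid.uuid4()))
snap = RBDSnapshot(vol)

restore(vol, snap)  # correct order: UUID rewritten on the volume
assert vol.fs_uuid != snap.fs_uuid

try:
    snap.regenerate_fs_uuid()  # the buggy order: fails on the snapshot
except PermissionError as err:
    print(err)
```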

Signed-off-by: Julian Pelizäus <julian.pelizaeus@canonical.com>
@roosterfish roosterfish marked this pull request as ready for review January 19, 2024 10:52
Member

@tomponline tomponline left a comment


This looks good and I'll merge it, as it's a vast improvement on what's there now.
However, if you look at the LVM driver for this same functionality, it moves the target volume to a temporary name before restoring the snapshot to the target volume name. That way, if any of the restore or UUID regeneration steps fail, it can delete the partially restored volume, swap the old target volume back, and avoid leaving the target volume in a corrupted state.

Is this something we can do with the ceph driver too?
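The temp-rename safety pattern described above can be sketched with an in-memory stand-in for a storage pool. This is a hedged illustration only: real drivers rename and restore block devices, not dict entries, and the names used here (`Pool`, `restore_from_snapshot`, the `.tmp` suffix) are hypothetical rather than LXD's actual API.

```python
# Illustrative sketch of the rename-to-temporary restore pattern:
# keep the original volume recoverable until the restore fully succeeds.
import uuid


class Pool:
    def __init__(self):
        self.volumes = {}  # volume name -> filesystem UUID

    def rename(self, old, new):
        self.volumes[new] = self.volumes.pop(old)

    def restore_from_snapshot(self, name, fail=False):
        tmp = name + ".tmp"
        # 1. Move the current volume out of the way instead of overwriting it.
        self.rename(name, tmp)
        try:
            # 2. Restore the snapshot to the target name, then regenerate
            #    the UUID so the restored copy is unique.
            if fail:
                raise RuntimeError("simulated restore failure")
            self.volumes[name] = str(uuid.uuid4())
        except Exception:
            # 3. On failure: delete the partial restore and swap the
            #    original volume back, so nothing is left corrupted.
            self.volumes.pop(name, None)
            self.rename(tmp, name)
            raise
        else:
            # 4. Only once the restore succeeded, drop the old copy.
            del self.volumes[tmp]


pool = Pool()
pool.volumes["vol1"] = "original-uuid"

try:
    pool.restore_from_snapshot("vol1", fail=True)
except RuntimeError:
    pass

# The original volume survived the failed restore intact.
assert pool.volumes == {"vol1": "original-uuid"}

pool.restore_from_snapshot("vol1")
assert pool.volumes["vol1"] != "original-uuid"
```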

@tomponline tomponline merged commit 9ffad53 into canonical:main Jan 19, 2024
26 checks passed
@roosterfish
Contributor Author

> This looks good and I'll merge it, as it's a vast improvement on what's there now. However, if you look at the LVM driver for this same functionality, it moves the target volume to a temporary name before restoring the snapshot to the target volume name. That way, if any of the restore or UUID regeneration steps fail, it can delete the partially restored volume, swap the old target volume back, and avoid leaving the target volume in a corrupted state.
>
> Is this something we can do with the ceph driver too?

Thanks, I'll add a note. In particular, the reverter pattern could be used more frequently. I also found that the VM's filesystem volume doesn't get restored; that looks to be missing for other drivers too, if I'm not mistaken.
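The reverter pattern mentioned here can be sketched as a small undo stack: register an undo step after each successful action, run the stack in reverse if anything later fails, and discard it on success. LXD has a similar helper in its Go codebase; this Python sketch is a generic analogue and does not reproduce that helper's exact API.

```python
# Minimal sketch of a reverter: undo completed steps, in reverse order,
# when a later step fails. Names here are illustrative only.
class Reverter:
    def __init__(self):
        self._undo = []
        self._succeeded = False

    def add(self, fn):
        # Register an undo step for an action that just completed.
        self._undo.append(fn)

    def success(self):
        # Mark the whole operation as complete; undo steps are dropped.
        self._succeeded = True

    def fail(self):
        # Run the undo stack in reverse unless success() was called.
        if not self._succeeded:
            for fn in reversed(self._undo):
                fn()


state = []
rev = Reverter()
try:
    state.append("step1")
    rev.add(lambda: state.remove("step1"))

    state.append("step2")
    rev.add(lambda: state.remove("step2"))

    raise RuntimeError("simulated failure")
    # On the success path, rev.success() would be called here instead.
except RuntimeError:
    pass
finally:
    rev.fail()

assert state == []  # both steps were undone, newest first
```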

@roosterfish roosterfish deleted the rbd_restore_uuid branch January 19, 2024 14:28
tomponline added a commit to tomponline/lxd-pkg-snap that referenced this pull request Feb 1, 2024
Lands fixes from:

 canonical/lxd#12745
 canonical/lxd#12777
 canonical/lxd#12805 (partial)

Signed-off-by: Thomas Parrott <thomas.parrott@canonical.com>
Successfully merging this pull request may close these issues:

Cannot restore Ceph RBD backed XFS and Btrfs volumes